16 research outputs found

    Exploit fully automatic low-level segmented PET data for training high-level deep learning algorithms for the corresponding CT data

    We present an approach for fully automatic urinary bladder segmentation in CT images using artificial neural networks. Automatic medical image analysis has become an invaluable tool in different treatment stages of disease. Medical image segmentation in particular plays a vital role, since segmentation is often the initial step in an image analysis pipeline. Since deep neural networks have made a large impact on the field of image processing in recent years, we use two different deep learning architectures to segment the urinary bladder. Both of these architectures are based on pre-trained classification networks that are adapted to perform semantic segmentation. Since deep neural networks require a large amount of training data, specifically images and corresponding ground truth labels, we furthermore propose a method to generate such a suitable training data set from Positron Emission Tomography/Computed Tomography image data. This is done by applying thresholding to the Positron Emission Tomography data to obtain a ground truth and by utilizing data augmentation to enlarge the dataset. In this study, we discuss the influence of data augmentation on the segmentation results, and compare and evaluate the proposed architectures in terms of qualitative and quantitative segmentation performance. The results presented in this study allow us to conclude that deep neural networks are a promising approach for segmenting the urinary bladder in CT images. Comment: 20 pages
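The ground-truth generation described above (thresholding the PET uptake, then paired augmentation of image and label) can be sketched as follows. This is an illustrative numpy sketch, not the authors' code; the threshold value and array shapes are made-up placeholders:

```python
import numpy as np

def pet_threshold_mask(pet_volume, threshold):
    """Binarize a PET volume: voxels above the uptake threshold become
    foreground (e.g. the urinary bladder), everything else background."""
    return (pet_volume > threshold).astype(np.uint8)

def augment_flip(image, mask):
    """Simple paired augmentation: flip image and mask together so the
    label stays aligned with the image."""
    return np.fliplr(image), np.fliplr(mask)

# toy 2D "PET slice" with a bright 2x2 region of high tracer uptake
pet = np.zeros((4, 4))
pet[1:3, 1:3] = 5.0
mask = pet_threshold_mask(pet, threshold=2.5)
print(mask.sum())  # 4 foreground voxels
```

The key point is that the augmentation must always be applied to image and mask jointly, otherwise the generated labels no longer match the CT data.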

    Deep Learning -- A first Meta-Survey of selected Reviews across Scientific Disciplines, their Commonalities, Challenges and Research Impact

    Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Similar to the basic structure of a brain, a deep learning algorithm consists of an artificial neural network, which resembles the biological brain structure. Mimicking the learning process of humans with their senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform the state-of-the-art methods in different tasks and, because of this, the whole field has seen exponential growth in recent years. This growth has resulted in well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already returns over 11,000 results in Q3 2020 for the search term 'deep learning', and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of even a subfield. However, there are several review articles about deep learning that focus on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines. The categories (computer vision, language processing, medical informatics and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges and future directions for every sub-category. Comment: 83 pages, 22 figures, 9 tables, 100 references

    Anatomy Completor: A Multi-class Completion Framework for 3D Anatomy Reconstruction

    In this paper, we introduce a completion framework to reconstruct the geometric shapes of various anatomies, including organs, vessels and muscles. Our work targets a scenario where one or multiple anatomies are missing in the imaging data due to surgical, pathological or traumatic factors, or simply because these anatomies are not covered by image acquisition. Automatic reconstruction of the missing anatomies benefits many applications, such as organ 3D bio-printing, whole-body segmentation, animation realism, paleoradiology and forensic imaging. We propose two paradigms based on a 3D denoising auto-encoder (DAE) to solve the anatomy reconstruction problem: (i) the DAE learns a many-to-one mapping between incomplete and complete instances; (ii) the DAE directly learns a one-to-one residual mapping between the incomplete instances and the target anatomies. We apply a loss aggregation scheme that enables the DAE to learn the many-to-one mapping more effectively and further enhances the learning of the residual mapping. On top of this, we extend the DAE to a multiclass completor by assigning a unique label to each anatomy involved. We evaluate our method using a CT dataset with whole-body segmentations. Results show that our method produces reasonable anatomy reconstructions given instances with different levels of incompleteness (i.e., one or multiple random anatomies are missing). Codes and pretrained models are publicly available at https://github.com/Jianningli/medshapenet-feedback/tree/main/anatomy-completor Comment: 15 pages
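Paradigm (ii), the residual mapping, can be illustrated with a toy numpy example. This is not the authors' code: the "predicted" residual below is simply the exact ground truth, to show how the completed multi-class volume is assembled from the incomplete input plus the network's residual output:

```python
import numpy as np

def complete_with_residual(incomplete, predicted_residual):
    """Paradigm (ii) from the abstract: the network predicts only the
    missing part (the residual); combining it with the incomplete input
    yields the completed multi-class label volume."""
    return np.maximum(incomplete, predicted_residual)

# toy 1D "volume": two anatomies labeled 1 and 2, anatomy 2 is missing
complete = np.array([0, 1, 1, 2, 2, 0])
incomplete = np.where(complete == 2, 0, complete)  # drop anatomy 2
residual = complete - incomplete                   # what the DAE should predict
print(complete_with_residual(incomplete, residual))  # [0 1 1 2 2 0]
```

Because each anatomy carries a unique label, the same residual formulation extends directly to the multiclass completor described in the abstract.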

    Apple Vision Pro for Healthcare: "The Ultimate Display"? -- Entering the Wonderland of Precision Medicine

    At the Worldwide Developers Conference (WWDC) in June 2023, Apple introduced the Vision Pro. The Vision Pro is a Mixed Reality (MR) headset; more specifically, it is a Virtual Reality (VR) device with an additional Video See-Through (VST) capability. The VST capability also turns the Vision Pro into an Augmented Reality (AR) device. The AR feature is enabled by streaming the real world via cameras to the (VR) screens in front of the user's eyes. This is, of course, not unique and is similar to other devices, like the Varjo XR-3. Nevertheless, the Vision Pro has some interesting features, like an inside-out screen that can show the headset wearer's eyes to "outsiders", or a button on the top, called the "Digital Crown", that allows you to seamlessly blend digital content with your physical space by turning it. In addition, it is untethered, except for the cable to the battery, which makes the headset more agile compared to the Varjo XR-3. This could actually come closer to the "Ultimate Display", which Ivan Sutherland had already sketched in 1965. Since the device, like the Ultimate Display, is not yet available to the public, in this perspective we want to take a look into the crystal ball to see if it can overcome some clinical challenges that - especially - AR still faces in the medical domain, but also go beyond that and discuss whether the Vision Pro could support clinicians in essential tasks, allowing them to spend more time with their patients. Comment: This is a Preprint under CC BY. This work was supported by NIH/NIAID R01AI172875, NIH/NCATS UL1 TR001427, the REACT-EU project KITE and enFaced 2.0 (FWF KLI 1044). B. Puladi was funded by the Medical Faculty of the RWTH Aachen University as part of the Clinician Scientist Program. C. Gsaxner was funded by the Advanced Research Opportunities Program from the RWTH Aachen University

    The HoloLens in Medicine: A systematic Review and Taxonomy

    The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality display, is the main player in the recent boost in medical augmented reality research. In medical settings, the HoloLens enables the physician to obtain immediate insight into patient information, directly overlaid with their view of the clinical scenario; the medical student to gain a better understanding of complex anatomies or procedures; and even the patient to execute therapeutic tasks with improved, immersive guidance. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 until 2021, when attention shifted towards its successor, the HoloLens 2. We identified 171 relevant publications through a systematic search of the PubMed and Scopus databases. We analyze these publications with regard to their intended use case, technical methodology for registration and tracking, data sources, visualization, as well as validation and evaluation. We find that, although the feasibility of using the HoloLens in various medical scenarios has been shown, increased efforts in the areas of precision, reliability, usability, workflow and perception are necessary to establish AR in clinical practice. Comment: 35 pages, 11 figures

    Radiomics in head and neck cancer outcome predictions

    The data are publicly available on The Cancer Imaging Archive (TCIA) [41] website and can be downloaded using the NBIA Data Retriever [42]: https://wiki.cancerimagingarchive.net/display/Public/Head-Neck-PET-CT, accessed on 11 October 2022. The source code is available on GitHub: https://github.com/MariaGoncalves3/Radiomics_for_Head_And_Neck_Cancer, accessed on 11 October 2022.
    Head and neck cancer has great regional anatomical complexity, as it can develop in different structures, exhibiting diverse tumour manifestations and high intratumoural heterogeneity, which is strongly related to treatment resistance, progression, the appearance of metastases, and tumour recurrence. Radiomics has the potential to address these obstacles by extracting quantitative, measurable features from the region of interest in medical images. Medical imaging is a common source of information in clinical practice, presenting a potential alternative to biopsy, as it allows the extraction of a large number of features that, although not visible to the naked eye, may be relevant for tumour characterisation. Taking advantage of machine learning techniques, the extracted features, when associated with biological parameters, can be used for diagnosis and prognosis with a predictive accuracy valuable for clinical decision-making. Therefore, the main goal of this contribution was to determine to what extent the features extracted from Computed Tomography (CT) are related to cancer prognosis, namely Locoregional Recurrences (LRs), the development of Distant Metastases (DMs), and Overall Survival (OS). From this set of tumour characteristics, predictive models were developed using machine learning techniques. The tumour was described by radiomic features extracted from the images and by the clinical data of the patient. The performance of the models demonstrated that the most successful algorithm was XGBoost, and the inclusion of the patients' clinical data was an asset for cancer prognosis. Under these conditions, models were created that can reliably predict LR, DM, and OS status, with area under the ROC curve (AUC) values of 0.74, 0.84, and 0.91, respectively. In summary, the promising results obtained show the potential of radiomics, since the considered cancer prognoses can, in fact, be expressed through CT scans.
    This work received funding from the Austrian Science Fund (FWF) KLI 678-B31: "enFaced - Virtual and Augmented Reality Training and Navigation Module for 3D-Printed Facial Defect Reconstructions", FWF KLI 1044: "enFaced 2.0 - Instant AR Tool for Maxillofacial Surgery" (https://enfaced2.ikim.nrw/, accessed on 11 October 2022), "CAMed" (COMET K-Project 871132), which is funded by the Austrian Federal Ministry of Transport, Innovation and Technology (BMVIT), the Austrian Federal Ministry for Digital and Economic Affairs (BMDW), and the Styrian Business Promotion Agency (SFG), and the FCT - Fundação para a Ciência e a Tecnologia within the R&D Units Project Scope: UIDB/00319/2020. Further, we acknowledge the REACT-EU project KITE (Plattform für KI-Translation Essen, https://kite.ikim.nrw/, accessed on 11 October 2022)
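The modelling pipeline described above (radiomic plus clinical features, a boosted-tree classifier, AUC evaluation) can be sketched on synthetic data. This is not the authors' code: scikit-learn's GradientBoostingClassifier stands in for XGBoost, and all features and labels below are fabricated purely for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
radiomic = rng.normal(size=(n, 5))          # stand-in CT texture/shape features
clinical = rng.integers(0, 2, size=(n, 2))  # stand-in clinical covariates
X = np.hstack([radiomic, clinical])         # clinical data appended as extra features
# synthetic outcome (e.g. distant metastasis yes/no) with some real signal
y = (radiomic[:, 0] + clinical[:, 0] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(round(auc, 2))
```

Concatenating clinical covariates alongside the radiomic features, as in `np.hstack` above, is the simplest way to reproduce the paper's finding that clinical data improves the prognosis models.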

    Synthetic skull bone defects for automatic patient-specific craniofacial implant design

    Patient-specific craniofacial implants are used to repair skull bone defects after trauma or surgery. Currently, cranial implants are designed and produced by third-party suppliers, which is usually time-consuming and expensive. Recent advances in additive manufacturing have made the in-hospital or in-operation-room fabrication of personalized implants feasible. However, the implants are still manufactured by external companies. To facilitate an optimized workflow, fast and automatic implant manufacturing is highly desirable. Data-driven approaches, such as deep learning, currently show great potential towards automatic implant design. However, a considerable amount of data is needed to train such algorithms, which is, especially in the medical domain, often a bottleneck. Therefore, we present CT-imaging data of the craniofacial complex from 24 patients, in which we injected various artificial cranial defects, resulting in 240 data pairs and 240 corresponding implants. Based on this work, automatic implant design and manufacturing processes can be trained. Additionally, the data of this work build a solid base for researchers to work on automatic cranial implant designs. Metadata: Image Acquisition Matrix Size · Image Slice Thickness · craniofacial region; imaging technique · computed tomography; Sample Characteristic - Organism. Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.13265225
    This investigation was approved by the internal review board (IRB) of the Medical University of Graz, Austria (IRB: EK-30-340 ex 17/18). This work was supported by CAMed (COMET K-Project 871132), which is funded by the Austrian Federal Ministry of Transport, Innovation and Technology (BMVIT), the Austrian Federal Ministry for Digital and Economic Affairs (BMDW), and the Styrian Business Promotion Agency (SFG), as well as by the Austrian Science Fund (FWF) KLI 678-B31: "enFaced: Virtual and Augmented Reality Training and Navigation Module for 3D-Printed Facial Defect Reconstructions" and the TU Graz LEAD Project "Mechanics, Modeling and Simulation of Aortic Dissection". Privatdozent Dr. Dr. Jan Egger was supported as Visiting Professor by the Overseas Visiting Scholars Program from the Shanghai Jiao Tong University (SJTU) in China. Finally, we thank Professor Hannes Deutschmann, MD, from the Department of Radiology - Division of Neuroradiology, Vascular and Interventional Neuroradiology of the Medical University of Graz, for kindly providing us with the source CT datasets used in this work
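The defect-injection idea (removing an artificial region from a binary skull volume so that the removed part becomes the ground-truth implant) can be sketched as follows. This is a toy numpy example with a spherical defect, not the authors' actual pipeline:

```python
import numpy as np

def inject_spherical_defect(skull, center, radius):
    """Create one training pair by carving a spherical region out of a
    binary skull volume. Returns (defective skull, implant), where the
    implant is exactly the removed bone."""
    zz, yy, xx = np.indices(skull.shape)
    sphere = ((zz - center[0])**2 + (yy - center[1])**2
              + (xx - center[2])**2) <= radius**2
    implant = skull & sphere      # the part that was cut out
    defective = skull & ~sphere   # the skull with the defect
    return defective, implant

# toy solid "skull" volume
skull = np.ones((8, 8, 8), dtype=bool)
defective, implant = inject_spherical_defect(skull, center=(4, 4, 4), radius=2)
# by construction, defect + implant recombine exactly to the original skull
assert np.array_equal(defective | implant, skull)
```

Varying the center, radius and shape of the carved region over many scans is one way to obtain the kind of (defective skull, implant) pairs the dataset provides.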

    AutoImplant 2020-First MICCAI Challenge on Automatic Cranial Implant Design

    The aim of this paper is to provide a comprehensive overview of the MICCAI 2020 AutoImplant Challenge. The approaches and publications submitted and accepted within the challenge are summarized and reported, highlighting common algorithmic trends and algorithmic diversity. Furthermore, the evaluation results are presented, compared and discussed with regard to the challenge aim: seeking low-cost, fast and fully automated solutions for cranial implant design. Based on feedback from collaborating neurosurgeons, this paper concludes by stating open issues and post-challenge requirements for intra-operative use. The code can be found at https://github.com/Jianningli/tmi